%review[w88,jmc]		Review of The Question of Artificial Intelligence by Bloomfield
\input memo.tex[let,jmc]
\noindent{\it THE QUESTION OF ARTIFICIAL INTELLIGENCE},

\noindent Brian Bloomfield, ed., Croom Helm, London, New York,  Sydney.

	This book is of a genre that treats a scientific field
using various social science and humanistic disciplines,
e.g. philosophy, history, sociology, psychology and politics.  Scientists
often complain about the results, both generally (judging the whole effort
as wasted) and specifically (citing instances of ignorance and
misunderstanding).  I'm open-minded about the general activity;
maybe the sociology of research in AI has independent intellectual
interest, though surely less than that of AI itself, and sociological
observations might cause participants in the field to change the way they
do something, e.g. recognize achievement, define authority and distribute
rewards.  This review mainly concerns specific matters, and is mostly negative,
complaining about ignorance and prejudice.  The review also contains
some suggestions about how this kind of thing can be done better ---
assuming it is to be done at all.

	The successive chapters are entitled
``AI at the Crossroads'' by S. G. Shanker, dealing with philosophy,
 ``The Culture of AI'' by B. P. Bloomfield,
 ``Development and Establishment in AI'' by J. Fleck,
 ``Frames of AI'' by J. Schopman,
``Involvement, Detachment and Programming: The Belief in PROLOG''
by P. Leith and
``Expert Systems, AI and the Behavioural Co-ordinates of Skill'' by
H. M. Collins.

	``AI at the Crossroads'' suggests entitling this review
``Some Philosophers at a Crossroads''.  Shanker's path
from the crossroads would end with epistemology and the philosophy
of mind leaving philosophy entirely.  AI programs require knowledge and
belief, and their construction requires that knowledge and belief be
formalized and studied scientifically.  Shanker ignores this area, in which philosophers and AI researchers
have begun to co-operate and compete.  Instead he considers the idea of
artificial intelligence to be a ``category error'' of some almost
unintelligible sort.

	To someone engaged in AI research, it seems odd that, for all his
denunciation of AI,
Shanker never makes clear whether he claims there is any
particular activity in which the external performance of computer programs must
remain inferior to that of humans.
It seems likely that he isn't making such a claim.  Instead, much
of what he says seems to be just an extreme demand that different levels
of organization not be related in the same explanation.  The most
striking example of this is ``$\ldots$ the psychologist can have
no recourse to neural nets in order to explain, for example,
the results of `reaction time studies' ''.

	Shanker's 124 notes include no reference to the last 30
years of technical
literature of AI, e.g. no textbook, no articles in {\it Artificial
Intelligence} and no papers in the proceedings of the International
Joint Conferences on AI.  This permits him to invent the subject.

	Thus he invents and criticizes an ideology of AI in which
what a computer program knows is identified with the measure of
information introduced by Claude Shannon in 1948.  I wasn't aware
that I or any significant AI pioneer made that identification, and
it finally occurred to me to check whether even Shannon did.  He
didn't.  His 1950 paper ``Programming a Computer for Playing Chess'',
cited in Shanker's article, never mentions information.

	While AI can only bandy words with Shanker and people
of similar views, we have serious business with many other philosophers.
An intelligent program must have a general view of the world into which
facts about particular situations fit.  It must have views about how
knowledge is obtained and verified.  It must be able to represent
facts about the effects of actions.  It must have some idea of what
choices are available to itself and other intelligences.  This overlap
in subject matter between AI and philosophy has led to increasing
interaction.

	Examples of philosophical work relevant to AI
(besides mathematical logic) include the work of
Frege (sense and denotation),
G\"odel (modern mathematical Platonism),
Tarski (theory of truth),
Quine (ontology and bound variables),
Putnam (natural kinds),
Hintikka (formalization of facts about knowledge),
Montague (paradoxes of intensionality),
Kripke (semantics of modality),
Gettier (counterexamples to knowledge as justified true belief),
Grice (conversational implicatures),
and
Searle (performatives).  However, all these topics need to be
treated more modestly (in scope) and more formally and precisely than is usually
done in philosophy.  In addition to the aid AI has received
from the above, we should also mention the encouragement
received from Daniel Dennett.

	In exchange, I believe that AI's concrete approach to
epistemology will greatly affect philosophy.  Indeed philosophers,
e.g. Hintikka, and mathematical logicians are already studying
the formalization of nonmonotonic reasoning, a topic that originated
in AI.

	``The Culture of AI'' argues that the ideas put forth by AI
researchers (and scientists generally) should not be discussed
independently of the culture that developed them.  We don't agree with
this, but have no objection to also discussing the culture.  A rather
extreme example of considering culture is favorably cited by Bloomfield,
namely Athanasiou's

\item{}``The culture of AI is imperialist and seeks to expand the kingdom of
the machine $\ldots$.  The AI community is well organized and well
funded, and its culture fits its dreams: it has high priests, its
greedy businessmen, its canny politicians.  The U.S. Department of
Defense is behind it all the way.  And like the communists of old,
AI scientists believe in their revolution; the old myths of tragic
hubris don't trouble them at all''.

	It's rather hard to get down to discussing declarative vs.
procedural representations or combinatorial explosion after such bombast.
Moreover, whether current expert system technology is capable of writing
useful programmed assistants for American Express authorizers, general
medical practitioners, ``barefoot doctors'' in China, district attorneys
or Navy captains is an objective question, and it doesn't seem that
Bloomfield intends to help answer it.

	We can't tell whether there is much to say about how the AI
cultural milieu influenced its ideas, because Bloomfield's information
about the AI culture is third hand.  There is no sign that he talked to AI
students or researchers himself.  Instead he cites the books by Joseph
Weizenbaum and Sherry Turkle.  Weizenbaum dislikes the M.I.T. hackers,
AI and otherwise, and confuses hackers with researchers; these
groups only partly overlap.  Turkle at least did some well prepared
interviewing of both hackers and researchers.  However, she doesn't make
much of a case that the ideas stemmed from the culture per se.  Indeed the
originators of many of the ideas were and are non-participants in the
informal culture of the AI laboratories.  It occurs to me that since most
of what we know about Socrates's ideas comes from Plato, perhaps the
authors of this volume consider it unfair to use primary sources even in
studying the activities of people alive and active today.

	``Development and Establishment in AI'' contains a lot of
administrative history of AI research institutions and their government
support.  The information about Britain is moderately voluminous and seems
more or less accurate, and the paper contains almost all the references to
actual AI literature that occur in the volume.  Its American history is
less accurate.  There was no ``Automata Studies'' conference held in 1952.
The volume of that title was composed of papers solicited by mail.  The
Dartmouth Summer Research Project on Artificial Intelligence was not a ``summer
school'', i.e. the participants were not divided, even informally, into
lecturers and students.  The Newell-Simon group began its activities about
two years before the Dartmouth conference.  It is indeed true that the
pioneers of AI in the U.S. met each other early, formed research groups
that made continued contributions, and became authorities in the field.
It's hard to see how it could have been otherwise.  A fuller picture would
also mention the also-rans in the history of AI, people whose
ideas did not meet with success or acceptance and who dropped out.

	The ``AI establishment'' owes
little to the general ``scientific establishment''.  AI would have
developed much more slowly in the U.S. if we had had to persuade the
general run of physicists, mathematicians, biologists, psychologists or
electrical engineers on advisory committees to allow substantial NSF money
to be allocated to AI research.  Moreover, the approaches to intelligence
originated by Minsky, Newell, Simon and myself were quite different from
those advocated by Norbert Wiener, John von Neumann or Warren McCulloch.
Some of us carefully avoided the name ``cybernetics'' in order to keep the
distinction clear.

	Our good fortune with ARPA is due to its creation with new money
at a time when we were ready to ask for support, and very substantially
to the psychologist J. C. R. Licklider.  Licklider was on the Air Force
Scientific Advisory Board around 1960 and argued that large command and
control systems were being built with no support for the relevant basic
science.  ARPA responded by offering to create an office and budget for
such support if Licklider would agree to head it.  AI was one of the
computer science areas Licklider and his successors at DARPA considered relevant
to Defense Department problems.  The scientific establishment was only
minimally, if at all, consulted.
In contrast, European AI research long depended on crumbs left by the more
established sciences.  Recent PhDs were unable to initiate the research,
and the European heads of laboratories still tend to be older people with
existing reputations in other fields.

	We make a final remark about the Lighthill report, which initiated
one of the dry periods in British AI funding.  When a
physicist is forced to think about AI, he generally reinvents the subject
in his individual way.  Some expect it to be easy and others impossible.
Lighthill was in the latter category.  In the 1974 BBC debate, I thought
I had a powerful argument and asked Lighthill why, if the physicists
hadn't mastered turbulence in 100 years, they should expect AI
researchers to give up just because they hadn't mastered AI in 20.
Lighthill's reply, which BBC unfortunately didn't include in the
broadcast, was that the physicists should give up on turbulence.
Hardly any physicists would agree with Lighthill's statement and
maybe he didn't mean it.

	Despite the deficiencies indicated above, the paper shows that
attention to detail does pay off in useful information about history.

	``Frames of Artificial Intelligence'' by J. Schopman purports ``to
sketch a close-up of a crucial moment in the history of Artificial
Intelligence (AI), the moment of its genesis in 1956''.  Schopman begins
by telling us that ``an exposition will be given of the investigative
method used, SCOST ---  the `Social construction of science and technology'.''
The crucial moment is stated to be the Dartmouth Summer Research Project
on Artificial Intelligence except that Schopman refers to it as a conference
and mixes it up with the {\it Automata Studies} collection of papers.  The
papers for that collection were solicited starting in 1952, and the volume
was finally published in 1956.  The Dartmouth project did not result in
a publication.

	Whatever the SCOST method includes, it evidently doesn't include
either interviewing the participants in the activity (almost all of whom
are still alive and active) or looking for contemporary documents.  The
contrast with Herbert Stoyan's work on the history of the LISP programming
language is amazing.  Stoyan started his work while still living in East
Germany and unable to travel.  Nevertheless, he wrote to everyone involved
in early LISP work, collected all the documents anyone would copy for him
and was able to confront what people told him in letters and interviews
(after he was allowed to emigrate) with what the early documents said.
He eventually came to know more about LISP's early history than any
individual participant.  If Schopman or anyone else wants to know what
we had in mind when we proposed the Dartmouth study, he should obtain
a copy of the proposal.  If he wants to know why the Rockefeller
Foundation gave us the \$7500, he could begin by asking them if anyone
there wrote a memorandum at the time justifying the support.

	Old proposals and old granting-agency memoranda documenting their
support decisions are an important unused tool in the recent history of
science.  The proposals often say in ways unrecorded in published papers
what the researcher was hoping to accomplish, and the support memoranda
tell what the agency thought it was accomplishing.  Old referees' reports
on papers submitted for publication and proposal evaluations provide
another useful source.  Were there referees' reports on Einstein's 1905
papers?  In the U.S.A., the Freedom of Information Act provides an
important way of finding out what people in Government thought they were
doing.

	Now let's return to Schopman's actual speculations about what
people were doing.  He says that the Dartmouth ``conference'' was ``a result
of the choices made by a group of people who were dissatisfied with the
then-prevailing scientific way of studying human behaviour.  They
considered their approach as radically different, a revolution ---
the so-called `cognitive revolution'.''  Schopman has made all that up ---
or copied it from journalists who made it up.

	The proposal for the Dartmouth conference, as I remember having
written it, contains no criticism of anybody's way of studying human
behavior, because I didn't consider it relevant.  As suggested by the term
``artificial intelligence'' we weren't considering human behavior except
as a clue to possible effective ways of doing tasks.  The only
participants interested in human behavior were Newell and Simon, and they
didn't discuss it much in that forum.  There were no lectures on human
behavior at Dartmouth that summer.  Also, as far as I remember, the phrase
`cognitive revolution' came into use at least ten years later.

	For this reason, whatever revolution there may have been around
the time of the Dartmouth Project was to get entirely away from studying
human behavior and to consider the computer as a tool for solving certain
classes of problems.  Thus AI was
created as a branch of computer science and not as a branch of
psychology.  Newell and Simon continued to be interested in both
AI as computer science and AI as psychology, but they were somewhat
exceptional in this.

	Schopman mentions many influences of earlier work on AI
pioneers.  I can report that many of them didn't influence
me except negatively, but in order to settle the matter of influences
it would be necessary to actually ask (say) Minsky and Newell and
Simon.  As for myself, one of the reasons for inventing the term
``artificial intelligence'' was to escape association with ``cybernetics''.
Its concentration on analog feedback seemed misguided, and I wished
to avoid having either to accept Norbert (not Robert) Wiener as
a guru or having to argue with him.  (By the way, I assume that
the ``Walter Gibbs'' Schopman refers to as having influenced Wiener
is either the turn-of-the-century American physicist Josiah Willard Gibbs
or McCulloch's colleague Walter Pitts.)  Minsky tells me that neither
Wiener nor von Neumann, with whom he had personal contact, influenced
him, because he didn't agree with their ideas.  He does mention
influence from Rashevsky, McCulloch and Pitts.

	Schopman paints a picture of the intellectual situation in 1956
based on the publications of many people who wrote before that year.
Maybe that was the intellectual situation for many, but I suspect the
situation was more fragmented than that; many people hadn't read the
papers Schopman identifies as influential.  For example, the idea that
programming computers rather than building machines was the key to AI
received its first public emphasis at the Dartmouth meeting.  None of von Neumann
(surprisingly), Wiener, McCulloch, Ashby or MacKay thought in those
terms.  However, by the time of Dartmouth, Newell and Simon, Samuel and
Bernstein had already written programs.  McCarthy and Minsky expressed
their 1956 ideas as proposals for programs, although their earlier work
had not assumed programmable computers.

	However, Alan Turing had already made the point that AI was a
matter of programming computers in his 1950 article ``Computing Machinery
and Intelligence'' in the British philosophy journal {\it Mind}.  When I
asked (maybe in 1979) on a historical panel who had read Turing's paper
early in their AI work, I got negative answers.  The paper only became well
known after James R. Newman reprinted it in his 1956 {\it The World of
Mathematics}.  Actual influences depend on what is actually read.  A
diligent historian of science could check what papers were referred to.

	Finally, there is Schopman's chart that associates AI frames
(paradigms) with periods.  In no way did these ``paradigms'' dominate
work in the periods considered.  There were, however, substantial shifts
in emphasis at various times since the Dartmouth conference.  Someone
studying this will need to subdivide the AI ``paradigm'' in order
to say which ``subparadigms'' were popular at different times.  One
way to study this would be to classify PhD theses and IJCAI
papers and count them.  (Even sociologists can count.)

	``Involvement, Detachment and Programming: The Belief in Prolog''
by Philip Leith treats the enthusiasm for Prolog as a sociological
phenomenon analogous to the 16th century Ramist movement in the logic and
rhetoric of law.  The Britannica article on rhetoric says the Ramist
movement emphasized figures of speech.  I am not convinced that this
has much analogy to Prolog.
Leith's complaint that Kowalski's work on expressing the British
Nationality Act in logic programming was supported by the wrong Research
Council leads this American to speculate that purely British quarrels
about money and turf are being reflected; Americans should discreetly
tiptoe from the room.  At a 1987 conference on AI and law, the Kowalski
work was referred to respectfully by both the computer scientists and the
lawyers present.

	``Expert Systems, Artificial Intelligence and the Behavioural
Co-ordinates of Skill'' by H. M. Collins, a sociologist, is the paper
admitting the most straightforward response.  Collins classifies expert
systems into four levels beginning with computerization of a rule book,
followed by the incorporation of heuristics obtained by interviewing
experts but used by humans only as an adviser, followed by expert
systems acting autonomously and finally by systems with common sense.
This seems like a useful classification along one dimension.

	He also has nice examples.  One concerns a referee's decision
when one side in cricket inadvertently had an extra man on the field
during an ``over'', and the fact wasn't noticed till much later.  In
deciding what to do the referee had to go beyond the rule book.
Presumably he took at least the following considerations into account:
his intuitive concept of fairness; the probable perceptions of fairness
by the players, the spectators and his fellow officials; the need to
keep the game going; maintaining the authority of the officiating system;
and the need to reach a prompt decision.  All these
considerations involve the referee's common sense and refereeing experience.
None of them are in the rules of cricket, although some may be in
books about refereeing or in a handbook for cricket referees.
An AI system with human refereeing capability would need
general common sense knowledge and reasoning ability.  Collins's
intuition and that of the other authors in this collection is that
this is not possible.

	AI has to take such examples as challenges.  Should we be stumped, we
should admit it for the time being and promise to tackle the problem
later.  However, I don't feel stumped by the cricket referee
problem.  I agree with Collins that the
solution doesn't lie in simple extensions to the cricket rule book.
This would indeed require an impractical or even impossible
number of rules.  However, the formalization of common sense
is leading to ideas like formalized context with nonmonotonic rules
about how contexts might be extended.  These are discussed in
(McCarthy 1979, 1986, 1987).  These approaches are just beginning.
They took a long time to reach the concreteness required even to
write papers and still may not work.
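
	To give the flavor, here is a minimal sketch in the style of
(McCarthy 1986), with illustrative predicate names.  A common sense default
is written using an ``abnormality'' predicate $ab$, e.g.
$$\forall x.\,bird(x) \land \lnot ab(x) \supset flies(x),$$
and circumscription minimizes the extension of $ab$: from $bird(Tweety)$ and
no contrary information, the minimization yields $\lnot ab(Tweety)$ and hence
$flies(Tweety)$, while adding $\forall x.\,penguin(x) \supset ab(x)$ and
$penguin(Tweety)$ withdraws the conclusion nonmonotonically.  Rules of this
kind, attached to a formalized context that can be extended when the rule
book runs out, are what the cricket example seems to require.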

	However, it is not justified for philosophers or sociologists to
claim to have shown that common sense can't be formalized.  (The chief
sinner in this respect was Wittgenstein.)  If you want to show something
is impossible you have to prove theorems, as did Boltzmann (with
thermodynamics), G\"odel and Turing.  Then you must be careful not to go
beyond what the theorems say in your intuitive exposition.

	Philosophers, etc. are entitled to their negative intuitions, but
they should try to concretize them.  For example, let them try to devise
the easiest task that they think computers can't do.  If they are willing
to read current papers, they can be even more useful.  They can try to
devise the easiest problem that current AI methods can't solve.
\bigskip

\noindent{\it REFERENCES}

\noindent McCarthy, John (1979): ``First Order Theories of Individual 
Concepts and Propositions'', 
in Michie, Donald (ed.) {\it Machine Intelligence 9}, (University of
Edinburgh Press, Edinburgh).
%  .<<aim 325, concep.tex[e76,jmc]>>

\noindent McCarthy, John (1986):
``Applications of Circumscription to Formalizing Common Sense Knowledge'',
{\it Artificial Intelligence}, April 1986.
%  circum.tex[f83,jmc]

\noindent McCarthy, John (1987):
``Generality in Artificial Intelligence'', {\it Communications of the ACM},
Vol. 30, No. 12, pp. 1030-1035.
% genera[w86,jmc]

\noindent McCulloch, W. and Pitts, W. (1943): ``A logical calculus of the
ideas immanent in nervous activity''. {\it Bulletin of Mathematical Biophysics},
5, 115-137.

\noindent Shannon, C. (1950): ``Programming a computer for playing chess''.
{\it Philosophical Magazine}, 41.

\noindent Turing, A.M. (1950): ``Computing machinery and intelligence''.
{\it Mind}, 59, 433-460.

\noindent Wiener, N. (1948): {\it Cybernetics}. New York: Wiley.
\bigskip

\halign{#\hfil\cr
John McCarthy\cr
Computer Science Department\cr
Stanford University\cr
Stanford, California  94305\cr}
\bye
\smallskip\centerline{Copyright \copyright\ \number\year\ by John McCarthy}
\smallskip\noindent{This draft of REVIEW[W88,JMC]\ TEXed on \jmcdate\ at \theTime}
\vfill\eject\end

Notes:

Does Shanker assert the impossibility of a specific performance?
Is there some biological or psychological understanding that he
denies?

It is a pity that Shanker ignores the areas in which AI research
overlaps philosophy and in which AI research has profited from
the work of philosophers.  Consider the question of how knowledge
differs from true belief.
Gettier examples.
Frege
Positivism as a doctrine concerning how to construct intelligent
programs.

Philosophers at the crossroads.  Will they use their 2,000 years of study
of important questions or will they stand on the sidelines and
see epistemology detached from philosophy?

Some will do one thing.  Others will do another.

We don't want to eliminate philosophy from the science of AI ---
at least not so long as there is some hope of getting philosophers
to do some of the work.

knowing that, knowing how, knowing what, knowing whether,
knowing about, all I know is

19 - as measured by the spreading influence of the paradigm.  No, by
the fact that a modern chess program can beat S. G. Shanker.

21 - ``It shows that now factual claims have been made.''


queries to look up
Does Kowalski refer to McCarty, so that Leith ought to have noticed?
The baleful influence of Tony Battista.

Even Shannon didn't refer to information theory in his one article
on AI --- the paper on chess.

ALLEN.NEWELL@A.CS.CMU.EDU
history request
In a review of a bad book, The Question of Artificial Intelligence,
I plan to make some remarks about the institutional history of AI including
something like the following.

"AI didn't start as an establishment.  Minsky and I were fortunate that the MIT
Research Laboratory of Electronics had a ``joint services contract'' that
permitted its head, Jerome Wiesner, to say yes instantly when we
encountered him in the hall in May 1958 and asked for a secretary, a key
punch, and two programmers.  He countered by asking if we wanted to
supervise six graduate students in mathematics whom he had committed
himself to support but had no immediate job for.  I believe that the
Newell and Simon work at Rand started in a similar informal way."

Is my conjecture correct or was there a formal proposal to begin your
and Herb's work on complex information processing?

Shannon, C. E., "Programming a digital computer for playing chess,"
Phil. Mag., 1950, 41, 356-375.

Leith seems to be holding up one end of an unseemly squabble over
money.  See p. 241.

The alliance among Michie, Longuet-Higgins and Gregory with
Meltzer standing to one side was doomed to fail from the start.
There was never enough scientific compatibility, except possibly
between Meltzer and Michie, where there wasn't personal compatibility.

In discussing the ups and downs of the funding of AI in the U.S.
and the associated changes of emphasis, one should not neglect to
inquire into the views of the successive directors of DARPA and
also into the long term baleful influence of Anthony Battista.

With the rest of these guys, hangers on and parasites on AI,
we can only bandy words, but with the philosophers AI has
serious business.

Americans should tiptoe away gently, this stuff about the
wrong Research Council isn't meant for our ears.

***

	AI has many problems in common with philosophy.  If a robot
is to act intelligently in the common sense world, then
we need to give it a general framework in which to put particular
knowledge.  Creating such a framework involves some solution, however
naive, to problems on which philosophers also work.  Some of the

	For one thing the book seems to suffer from the methodology of
so-called {\it critical science} in which one supports ideological
complaints by selected citations.  Classical critical science was
generally concerned with (say) ``exposing the bourgeois character
of establishment scientific assumptions''.  The radicalism seems
to have faded in this book, but the methodology remains.